Chapter 10 – Introduction to Artificial Neural Networks

This notebook contains all the sample code and solutions to the exercises in chapter 10.

Setup

First, let's make sure this notebook works well in both Python 2 and 3, import a few common modules, ensure matplotlib plots figures inline, and prepare a function to save the figures:

In [1]:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals

# Common imports
import numpy as np
import os

# to make this notebook's output stable across runs
import tensorflow.compat.v1 as tf  # imported here because reset_graph() needs it

def reset_graph(seed=42):
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)

# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12

# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ann"

def save_fig(fig_id, tight_layout=True):
    path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
    os.makedirs(os.path.dirname(path), exist_ok=True)  # create the folder if needed
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format='png', dpi=300)

One approach to problem solving that has long been studied is to mimic the structure of the brain: artificial neural networks (ANNs).

10.1.3 Perceptrons

In [2]:
from IPython.display import Image
Image("./images/単純パーセプトロン_r2.jpg")
Out[2]:

What can it do? A single LTU can perform simple linear binary classification: it separates instances into a negative and a positive class, outputting the positive class when the weighted sum exceeds a threshold.

Then how does it learn? Biological neurons strengthen the connection between two cells when one repeatedly triggers the other; by analogy, the perceptron "makes a prediction for each individual instance and increases the connection weights from the neurons that would have contributed to the correct prediction."
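The learning rule just described can be sketched in NumPy. This is a minimal illustration, not the book's code; the AND task, learning rate, and epoch count are chosen just for the example:

```python
import numpy as np

def perceptron_fit(X, y, eta=1.0, n_epochs=10):
    """Train a single LTU with the perceptron learning rule."""
    w = np.zeros(X.shape[1])  # connection weights
    b = 0.0                   # bias term
    for _ in range(n_epochs):
        for x_i, y_i in zip(X, y):
            y_hat = int(x_i @ w + b >= 0)   # step (heaviside) activation
            # nudge the weights toward producing the correct output
            w += eta * (y_i - y_hat) * x_i
            b += eta * (y_i - y_hat)
    return w, b

# learn the linearly separable AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_fit(X, y)
preds = (X @ w + b >= 0).astype(int)        # [0, 0, 0, 1]
```

Each update moves the decision boundary only when a prediction is wrong, which is why training stops changing the weights once the data is correctly separated.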

In [3]:
Image("./images/パーセプトロンの重みの更新.jpg")
Out[3]:
In [4]:
Image("./images/パーセプトロン.jpg")
Out[4]:

Note: we set max_iter and tol explicitly to avoid warnings about the fact that their default value will change in future versions of Scikit-Learn.

The figure above shows "a multioutput classifier with two inputs that can simultaneously classify instances into three different binary classes." (The input neurons simply pass the given input straight through to their output.)

Aside

Linearly separable means, in geometry, that when two sets of points lie in a two-dimensional plane, they can be separated by a single straight line. (via Wikipedia)

Let's classify the iris dataset using the Perceptron class, which implements a single-LTU network.

In [5]:
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron

iris = load_iris()
X = iris.data[:, (2, 3)]  # petal length, petal width
y = (iris.target == 0).astype(int)  # Iris Setosa?

per_clf = Perceptron(max_iter=100, tol=-np.inf, random_state=42)
per_clf.fit(X, y)

y_pred = per_clf.predict([[2, 0.5]])
In [6]:
y_pred
Out[6]:
array([1])
In [7]:
a = -per_clf.coef_[0][0] / per_clf.coef_[0][1]
b = -per_clf.intercept_ / per_clf.coef_[0][1]

axes = [0, 5, 0, 2]

x0, x1 = np.meshgrid(
        np.linspace(axes[0], axes[1], 500).reshape(-1, 1),
        np.linspace(axes[2], axes[3], 200).reshape(-1, 1),
    )
X_new = np.c_[x0.ravel(), x1.ravel()]
y_predict = per_clf.predict(X_new)
zz = y_predict.reshape(x0.shape)

plt.figure(figsize=(10, 4))
plt.plot(X[y==0, 0], X[y==0, 1], "bs", label="Not Iris-Setosa")
plt.plot(X[y==1, 0], X[y==1, 1], "yo", label="Iris-Setosa")

plt.plot([axes[0], axes[1]], [a * axes[0] + b, a * axes[1] + b], "k-", linewidth=3)
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#9898ff', '#fafab0'])

plt.contourf(x0, x1, zz, cmap=custom_cmap)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="lower right", fontsize=14)
plt.axis(axes)

save_fig("perceptron_iris_plot")
plt.show()
Saving figure perceptron_iris_plot

Note that unlike a logistic regression classifier, a perceptron cannot output class membership probabilities.
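A small contrast on the same iris features makes the point (an illustrative sketch, not the book's code): the Perceptron only gives hard 0/1 decisions, while LogisticRegression also estimates probabilities.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression, Perceptron

iris = load_iris()
X = iris.data[:, (2, 3)]                     # petal length, petal width
y = (iris.target == 0).astype(int)           # Iris Setosa?

per_clf = Perceptron(max_iter=100, random_state=42).fit(X, y)
log_clf = LogisticRegression(random_state=42).fit(X, y)

hard_pred = per_clf.predict([[2, 0.5]])      # hard 0/1 decision only
proba = log_clf.predict_proba([[2, 0.5]])    # class probabilities; each row sums to 1
```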

Perceptrons turned out to have a serious weakness: they cannot solve some trivially simple problems, such as the exclusive-OR (XOR) classification problem.
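A quick illustrative check (not in the book): training a Perceptron on the four XOR points can never get all of them right, because no single straight line separates the two classes.

```python
import numpy as np
from sklearn.linear_model import Perceptron

# the four XOR points: no straight line separates class 0 from class 1
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

per_clf = Perceptron(max_iter=1000, random_state=42).fit(X, y)
# a linear decision boundary misclassifies at least one of the four points,
# so the training accuracy is capped at 0.75
acc = per_clf.score(X, y)
```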

The MLP (multilayer perceptron) was devised as the solution to this limitation.

10.1.4 MLPs and Backpropagation

An MLP is composed of one input layer, one or more layers of LTUs called hidden layers, and a final output layer. When a network has multiple hidden layers, it is called a deep neural network (DNN).

Its training algorithm is called backpropagation (backward propagation of errors).

Backpropagation: for each training instance, the algorithm first makes a prediction (forward pass) and measures the error; it then goes through each layer in reverse, measuring how much each connection contributed to the error (backward pass: error propagation); finally, it slightly tweaks the connection weights to reduce the error (gradient descent step).
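These three steps can be sketched for a tiny one-hidden-layer network. This is a minimal NumPy illustration, not the book's implementation; the architecture (4 sigmoid hidden units), loss (MSE), and learning rate are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.RandomState(42)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])            # XOR targets

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

W1, b1 = rng.randn(2, 4), np.zeros(4)             # input -> hidden
W2, b2 = rng.randn(4, 1), np.zeros(1)             # hidden -> output
eta = 0.5

losses = []
for epoch in range(2000):
    # forward pass: make a prediction and measure the error
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((y_hat - y) ** 2)))
    # backward pass: propagate the error through each layer
    d_out = (y_hat - y) * y_hat * (1 - y_hat)     # error at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)          # error at the hidden layer
    # gradient descent step: slightly tweak the connection weights
    W2 -= eta * h.T @ d_out / len(X)
    b2 -= eta * d_out.mean(axis=0)
    W1 -= eta * X.T @ d_hid / len(X)
    b1 -= eta * d_hid.mean(axis=0)
```

After enough epochs the measured loss has dropped well below its starting value, which is all backpropagation promises: each step moves the weights a little in the direction that reduces the error.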

For this algorithm to work properly, the step activation function was replaced with activation functions such as the logistic function. (Reason: the step function consists only of flat segments, so there is no gradient to work with; its derivative is 0 wherever it is defined.)

Activation functions

In [8]:
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def relu(z):
    return np.maximum(0, z)

def derivative(f, z, eps=0.000001):
    return (f(z + eps) - f(z - eps))/(2 * eps)
In [9]:
z = np.linspace(-5, 5, 200)

plt.figure(figsize=(11,4))

plt.subplot(121)
plt.plot(z, np.sign(z), "r-", linewidth=1, label="Step")
plt.plot(z, sigmoid(z), "g--", linewidth=2, label="Sigmoid")
plt.plot(z, np.tanh(z), "b-", linewidth=2, label="Tanh")
plt.plot(z, relu(z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
plt.legend(loc="center right", fontsize=14)
plt.title("Activation functions", fontsize=14)
plt.axis([-5, 5, -1.2, 1.2])

plt.subplot(122)
plt.plot(z, derivative(np.sign, z), "r-", linewidth=1, label="Step")
plt.plot(0, 0, "ro", markersize=5)
plt.plot(0, 0, "rx", markersize=10)
plt.plot(z, derivative(sigmoid, z), "g--", linewidth=2, label="Sigmoid")
plt.plot(z, derivative(np.tanh, z), "b-", linewidth=2, label="Tanh")
plt.plot(z, derivative(relu, z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
#plt.legend(loc="center right", fontsize=14)  # derivative = slope
plt.title("Derivatives", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])

save_fig("activation_functions_plot")
plt.show()
Saving figure activation_functions_plot
In [10]:
def heaviside(z):
    return (z >= 0).astype(z.dtype)

def mlp_xor(x1, x2, activation=heaviside):
    return activation(-activation(x1 + x2 - 1.5) + activation(x1 + x2 - 0.5) - 0.5)
In [11]:
x1s = np.linspace(-0.2, 1.2, 100)
x2s = np.linspace(-0.2, 1.2, 100)
x1, x2 = np.meshgrid(x1s, x2s)

z1 = mlp_xor(x1, x2, activation=heaviside)
z2 = mlp_xor(x1, x2, activation=sigmoid)

plt.figure(figsize=(10,4))

plt.subplot(121)
plt.contourf(x1, x2, z1)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: heaviside", fontsize=14)
plt.grid(True)

plt.subplot(122)
plt.contourf(x1, x2, z2)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: sigmoid", fontsize=14)
plt.grid(True)

Sidebar

Because biological neurons appear to use a roughly sigmoid-shaped activation function, sigmoid activations were studied for a long time; in ANNs, however, the ReLU activation function generally performs better. This is an example of where the analogy with biology leads us astray.

FNN for MNIST

Since each individual output of an MLP is binary, MLPs are often used for classification tasks. When, as with logistic regression, you want the probability of each class, the output layer uses a shared softmax function instead of individual activation functions. This architecture is an example of a feedforward neural network (FNN).
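For reference, a numerically stable softmax can be sketched in NumPy (an illustrative helper, not part of the book's code; subtracting the row maximum leaves the result unchanged but avoids overflow in the exponentials):

```python
import numpy as np

def softmax(logits):
    """Shared output activation: turns a score vector into class probabilities."""
    z = logits - np.max(logits, axis=-1, keepdims=True)  # for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum(axis=-1, keepdims=True)

scores = np.array([[2.0, 1.0, 0.1]])
probs = softmax(scores)   # each row sums to 1; the highest score gets the highest probability
```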

Using the Estimator API (formerly tf.contrib.learn)

In [12]:
import tensorflow.compat.v1 as tf  # use the TF1-style API under TensorFlow 2
tf.disable_v2_behavior()
Warning: tf.examples.tutorials.mnist is deprecated. We will use tf.keras.datasets.mnist instead. Moreover, the tf.contrib.learn API was promoted to tf.estimators and tf.feature_columns, and it has changed considerably. In particular, there is no infer_real_valued_columns_from_input() function or SKCompat class.

In [13]:
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0
X_test = X_test.astype(np.float32).reshape(-1, 28*28) / 255.0
y_train = y_train.astype(np.int32)
y_test = y_test.astype(np.int32)
X_valid, X_train = X_train[:5000], X_train[5000:]
y_valid, y_train = y_train[:5000], y_train[5000:]
In [14]:
# Workaround so that tf.estimator.inputs is available under TensorFlow 2.0
from tensorflow_core.estimator import inputs
print(tf.__version__)
2.0.0
In [15]:
feature_cols = [tf.feature_column.numeric_column("X", shape=[28 * 28])]
dnn_clf = tf.estimator.DNNClassifier(hidden_units=[300,100], n_classes=10,
                                     feature_columns=feature_cols)
input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"X": X_train}, y=y_train, num_epochs=40, batch_size=50, shuffle=True)
dnn_clf.train(input_fn=input_fn)
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: C:\Users\SAWADA~1\AppData\Local\Temp\tmpsk0m16lo
INFO:tensorflow:Saving checkpoints for 0 into C:\Users\SAWADA~1\AppData\Local\Temp\tmpsk0m16lo\model.ckpt.
INFO:tensorflow:loss = 118.61541, step = 1
INFO:tensorflow:global_step/sec: 153.103
INFO:tensorflow:loss = 10.635891, step = 101 (0.656 sec)
...
INFO:tensorflow:loss = 0.0011167721, step = 21701 (0.574 sec)
INFO:tensorflow:global_step/sec: 174.825
INFO:tensorflow:loss = 0.06830389, step = 21801 (0.572 sec)
INFO:tensorflow:global_step/sec: 172.511
INFO:tensorflow:loss = 0.08731497, step = 21901 (0.579 sec)
INFO:tensorflow:global_step/sec: 179.533
INFO:tensorflow:loss = 0.054533303, step = 22001 (0.558 sec)
INFO:tensorflow:global_step/sec: 174.882
INFO:tensorflow:loss = 0.038936563, step = 22101 (0.572 sec)
INFO:tensorflow:global_step/sec: 176.057
INFO:tensorflow:loss = 0.13902038, step = 22201 (0.568 sec)
INFO:tensorflow:global_step/sec: 169.182
INFO:tensorflow:loss = 0.09799403, step = 22301 (0.591 sec)
INFO:tensorflow:global_step/sec: 174.94
INFO:tensorflow:loss = 0.01287956, step = 22401 (0.572 sec)
INFO:tensorflow:global_step/sec: 175.44
INFO:tensorflow:loss = 0.0042184195, step = 22501 (0.570 sec)
INFO:tensorflow:global_step/sec: 173.083
INFO:tensorflow:loss = 0.0069257924, step = 22601 (0.578 sec)
INFO:tensorflow:global_step/sec: 170.309
INFO:tensorflow:loss = 0.063471116, step = 22701 (0.587 sec)
INFO:tensorflow:global_step/sec: 173.418
INFO:tensorflow:loss = 0.03470261, step = 22801 (0.577 sec)
INFO:tensorflow:global_step/sec: 175.44
INFO:tensorflow:loss = 0.027272206, step = 22901 (0.570 sec)
INFO:tensorflow:global_step/sec: 174.215
INFO:tensorflow:loss = 0.077978455, step = 23001 (0.574 sec)
INFO:tensorflow:global_step/sec: 178.782
INFO:tensorflow:loss = 0.09319438, step = 23101 (0.559 sec)
INFO:tensorflow:global_step/sec: 174.968
INFO:tensorflow:loss = 0.035563927, step = 23201 (0.572 sec)
INFO:tensorflow:global_step/sec: 175.209
INFO:tensorflow:loss = 0.041235916, step = 23301 (0.571 sec)
INFO:tensorflow:global_step/sec: 172.834
INFO:tensorflow:loss = 0.009381075, step = 23401 (0.579 sec)
INFO:tensorflow:global_step/sec: 179.46
INFO:tensorflow:loss = 0.0324748, step = 23501 (0.557 sec)
INFO:tensorflow:global_step/sec: 164.299
INFO:tensorflow:loss = 0.056843434, step = 23601 (0.609 sec)
INFO:tensorflow:global_step/sec: 174.047
INFO:tensorflow:loss = 0.0034896426, step = 23701 (0.575 sec)
INFO:tensorflow:global_step/sec: 176.8
INFO:tensorflow:loss = 0.0065230276, step = 23801 (0.565 sec)
INFO:tensorflow:global_step/sec: 172.464
INFO:tensorflow:loss = 0.015688892, step = 23901 (0.581 sec)
INFO:tensorflow:global_step/sec: 171.421
INFO:tensorflow:loss = 0.10999705, step = 24001 (0.582 sec)
INFO:tensorflow:global_step/sec: 169.205
INFO:tensorflow:loss = 0.034819383, step = 24101 (0.591 sec)
INFO:tensorflow:global_step/sec: 171.454
INFO:tensorflow:loss = 0.0008095294, step = 24201 (0.584 sec)
INFO:tensorflow:global_step/sec: 180.721
INFO:tensorflow:loss = 0.031165535, step = 24301 (0.552 sec)
INFO:tensorflow:global_step/sec: 178.331
INFO:tensorflow:loss = 0.027350005, step = 24401 (0.561 sec)
INFO:tensorflow:global_step/sec: 175.809
INFO:tensorflow:loss = 0.043521635, step = 24501 (0.568 sec)
INFO:tensorflow:global_step/sec: 179.211
INFO:tensorflow:loss = 0.06472516, step = 24601 (0.558 sec)
INFO:tensorflow:global_step/sec: 173.669
INFO:tensorflow:loss = 0.04345562, step = 24701 (0.576 sec)
INFO:tensorflow:global_step/sec: 179.321
INFO:tensorflow:loss = 0.03166969, step = 24801 (0.557 sec)
INFO:tensorflow:global_step/sec: 178.65
INFO:tensorflow:loss = 0.09581043, step = 24901 (0.561 sec)
INFO:tensorflow:global_step/sec: 174.188
INFO:tensorflow:loss = 0.010970813, step = 25001 (0.574 sec)
INFO:tensorflow:global_step/sec: 163.398
INFO:tensorflow:loss = 0.05000235, step = 25101 (0.612 sec)
INFO:tensorflow:global_step/sec: 172.767
INFO:tensorflow:loss = 0.011566881, step = 25201 (0.579 sec)
INFO:tensorflow:global_step/sec: 184.845
INFO:tensorflow:loss = 0.0054175123, step = 25301 (0.541 sec)
INFO:tensorflow:global_step/sec: 175.656
INFO:tensorflow:loss = 0.06370527, step = 25401 (0.569 sec)
INFO:tensorflow:global_step/sec: 180.058
INFO:tensorflow:loss = 0.07577646, step = 25501 (0.555 sec)
INFO:tensorflow:global_step/sec: 175.131
INFO:tensorflow:loss = 0.034274492, step = 25601 (0.571 sec)
INFO:tensorflow:global_step/sec: 172.915
INFO:tensorflow:loss = 0.055803187, step = 25701 (0.578 sec)
INFO:tensorflow:global_step/sec: 194.178
INFO:tensorflow:loss = 0.045042004, step = 25801 (0.514 sec)
INFO:tensorflow:global_step/sec: 187.343
INFO:tensorflow:loss = 0.0034470502, step = 25901 (0.534 sec)
INFO:tensorflow:global_step/sec: 174.737
INFO:tensorflow:loss = 0.0063578538, step = 26001 (0.573 sec)
INFO:tensorflow:global_step/sec: 155.28
INFO:tensorflow:loss = 0.0019137781, step = 26101 (0.643 sec)
INFO:tensorflow:global_step/sec: 181.32
INFO:tensorflow:loss = 0.034895387, step = 26201 (0.552 sec)
INFO:tensorflow:global_step/sec: 172.767
INFO:tensorflow:loss = 0.021306735, step = 26301 (0.579 sec)
INFO:tensorflow:global_step/sec: 157.973
INFO:tensorflow:loss = 0.05431043, step = 26401 (0.633 sec)
INFO:tensorflow:global_step/sec: 173.009
INFO:tensorflow:loss = 0.14460896, step = 26501 (0.579 sec)
INFO:tensorflow:global_step/sec: 160
INFO:tensorflow:loss = 0.0002007425, step = 26601 (0.624 sec)
INFO:tensorflow:global_step/sec: 166.944
INFO:tensorflow:loss = 0.086669706, step = 26701 (0.600 sec)
INFO:tensorflow:global_step/sec: 161.521
INFO:tensorflow:loss = 0.050536156, step = 26801 (0.619 sec)
INFO:tensorflow:global_step/sec: 171.973
INFO:tensorflow:loss = 0.020492004, step = 26901 (0.581 sec)
INFO:tensorflow:global_step/sec: 170.95
INFO:tensorflow:loss = 0.021951484, step = 27001 (0.585 sec)
INFO:tensorflow:global_step/sec: 171.302
INFO:tensorflow:loss = 0.01023348, step = 27101 (0.582 sec)
INFO:tensorflow:global_step/sec: 179.213
INFO:tensorflow:loss = 0.046139903, step = 27201 (0.559 sec)
INFO:tensorflow:global_step/sec: 175.203
INFO:tensorflow:loss = 0.076569736, step = 27301 (0.571 sec)
INFO:tensorflow:global_step/sec: 178.891
INFO:tensorflow:loss = 0.027040463, step = 27401 (0.559 sec)
INFO:tensorflow:global_step/sec: 173.772
INFO:tensorflow:loss = 0.05132245, step = 27501 (0.575 sec)
INFO:tensorflow:global_step/sec: 175.3
INFO:tensorflow:loss = 0.032442078, step = 27601 (0.570 sec)
INFO:tensorflow:global_step/sec: 180.18
INFO:tensorflow:loss = 0.039306372, step = 27701 (0.556 sec)
INFO:tensorflow:global_step/sec: 175.222
INFO:tensorflow:loss = 0.023783434, step = 27801 (0.570 sec)
INFO:tensorflow:global_step/sec: 175.083
INFO:tensorflow:loss = 0.052439, step = 27901 (0.571 sec)
INFO:tensorflow:global_step/sec: 173.826
INFO:tensorflow:loss = 0.011218366, step = 28001 (0.575 sec)
INFO:tensorflow:global_step/sec: 179.58
INFO:tensorflow:loss = 0.07354784, step = 28101 (0.557 sec)
INFO:tensorflow:global_step/sec: 173.137
INFO:tensorflow:loss = 0.0027483408, step = 28201 (0.578 sec)
INFO:tensorflow:global_step/sec: 175.635
INFO:tensorflow:loss = 0.008700022, step = 28301 (0.569 sec)
INFO:tensorflow:global_step/sec: 173.311
INFO:tensorflow:loss = 0.06555164, step = 28401 (0.577 sec)
INFO:tensorflow:global_step/sec: 174.931
INFO:tensorflow:loss = 0.017443413, step = 28501 (0.572 sec)
INFO:tensorflow:global_step/sec: 174.52
INFO:tensorflow:loss = 0.046724353, step = 28601 (0.573 sec)
INFO:tensorflow:global_step/sec: 176.645
INFO:tensorflow:loss = 0.03898704, step = 28701 (0.566 sec)
INFO:tensorflow:global_step/sec: 172.976
INFO:tensorflow:loss = 0.027161373, step = 28801 (0.578 sec)
INFO:tensorflow:global_step/sec: 173.912
INFO:tensorflow:loss = 0.082004994, step = 28901 (0.575 sec)
INFO:tensorflow:global_step/sec: 175.775
INFO:tensorflow:loss = 0.012499042, step = 29001 (0.569 sec)
INFO:tensorflow:global_step/sec: 166.502
INFO:tensorflow:loss = 0.019908478, step = 29101 (0.602 sec)
INFO:tensorflow:global_step/sec: 163.43
INFO:tensorflow:loss = 0.034308165, step = 29201 (0.612 sec)
INFO:tensorflow:global_step/sec: 156.986
INFO:tensorflow:loss = 0.05909121, step = 29301 (0.636 sec)
INFO:tensorflow:global_step/sec: 163.757
INFO:tensorflow:loss = 0.038152788, step = 29401 (0.611 sec)
INFO:tensorflow:global_step/sec: 168.208
INFO:tensorflow:loss = 0.017562907, step = 29501 (0.596 sec)
INFO:tensorflow:global_step/sec: 164.94
INFO:tensorflow:loss = 0.020669652, step = 29601 (0.606 sec)
INFO:tensorflow:global_step/sec: 167.708
INFO:tensorflow:loss = 0.03259717, step = 29701 (0.596 sec)
INFO:tensorflow:global_step/sec: 173.178
INFO:tensorflow:loss = 0.041636553, step = 29801 (0.577 sec)
INFO:tensorflow:global_step/sec: 170.637
INFO:tensorflow:loss = 0.038367845, step = 29901 (0.585 sec)
INFO:tensorflow:global_step/sec: 179.437
INFO:tensorflow:loss = 0.05253579, step = 30001 (0.557 sec)
INFO:tensorflow:global_step/sec: 172.815
INFO:tensorflow:loss = 0.011722049, step = 30101 (0.579 sec)
INFO:tensorflow:global_step/sec: 165.215
INFO:tensorflow:loss = 0.017304488, step = 30201 (0.605 sec)
INFO:tensorflow:global_step/sec: 171.084
INFO:tensorflow:loss = 0.037153855, step = 30301 (0.585 sec)
INFO:tensorflow:global_step/sec: 176.314
INFO:tensorflow:loss = 0.16244668, step = 30401 (0.567 sec)
INFO:tensorflow:global_step/sec: 171.271
INFO:tensorflow:loss = 0.017920703, step = 30501 (0.584 sec)
INFO:tensorflow:global_step/sec: 173.065
INFO:tensorflow:loss = 0.035627976, step = 30601 (0.578 sec)
INFO:tensorflow:global_step/sec: 172.955
INFO:tensorflow:loss = 0.034683198, step = 30701 (0.577 sec)
INFO:tensorflow:global_step/sec: 177.868
INFO:tensorflow:loss = 0.06340095, step = 30801 (0.563 sec)
INFO:tensorflow:global_step/sec: 174.552
INFO:tensorflow:loss = 0.04359825, step = 30901 (0.573 sec)
INFO:tensorflow:global_step/sec: 173.477
INFO:tensorflow:loss = 0.02211849, step = 31001 (0.576 sec)
INFO:tensorflow:global_step/sec: 173.195
INFO:tensorflow:loss = 0.025309794, step = 31101 (0.577 sec)
INFO:tensorflow:global_step/sec: 177.869
INFO:tensorflow:loss = 0.042125974, step = 31201 (0.562 sec)
INFO:tensorflow:global_step/sec: 175.645
INFO:tensorflow:loss = 0.017749788, step = 31301 (0.571 sec)
INFO:tensorflow:global_step/sec: 165.54
INFO:tensorflow:loss = 0.062537596, step = 31401 (0.603 sec)
INFO:tensorflow:global_step/sec: 183.007
INFO:tensorflow:loss = 0.087863624, step = 31501 (0.545 sec)
INFO:tensorflow:global_step/sec: 189.884
INFO:tensorflow:loss = 0.006662582, step = 31601 (0.526 sec)
INFO:tensorflow:global_step/sec: 178.944
INFO:tensorflow:loss = 0.033618234, step = 31701 (0.559 sec)
INFO:tensorflow:global_step/sec: 174.828
INFO:tensorflow:loss = 0.020037718, step = 31801 (0.572 sec)
INFO:tensorflow:global_step/sec: 160.512
INFO:tensorflow:loss = 0.030044276, step = 31901 (0.624 sec)
INFO:tensorflow:global_step/sec: 164.043
INFO:tensorflow:loss = 0.057461046, step = 32001 (0.610 sec)
INFO:tensorflow:global_step/sec: 167.907
INFO:tensorflow:loss = 0.01402814, step = 32101 (0.596 sec)
INFO:tensorflow:global_step/sec: 161.461
INFO:tensorflow:loss = 0.0132979425, step = 32201 (0.619 sec)
INFO:tensorflow:global_step/sec: 171.169
INFO:tensorflow:loss = 0.02816095, step = 32301 (0.584 sec)
INFO:tensorflow:global_step/sec: 168.186
INFO:tensorflow:loss = 0.047437683, step = 32401 (0.594 sec)
INFO:tensorflow:global_step/sec: 162.786
INFO:tensorflow:loss = 0.044473954, step = 32501 (0.615 sec)
INFO:tensorflow:global_step/sec: 168.068
INFO:tensorflow:loss = 0.0047298856, step = 32601 (0.594 sec)
INFO:tensorflow:global_step/sec: 178.928
INFO:tensorflow:loss = 0.018300455, step = 32701 (0.559 sec)
INFO:tensorflow:global_step/sec: 174.215
INFO:tensorflow:loss = 0.087423556, step = 32801 (0.574 sec)
INFO:tensorflow:global_step/sec: 174.521
INFO:tensorflow:loss = 0.026247056, step = 32901 (0.573 sec)
INFO:tensorflow:global_step/sec: 179.314
INFO:tensorflow:loss = 0.0047382647, step = 33001 (0.558 sec)
INFO:tensorflow:global_step/sec: 177.767
INFO:tensorflow:loss = 0.03915066, step = 33101 (0.564 sec)
INFO:tensorflow:global_step/sec: 177.623
INFO:tensorflow:loss = 0.0049777124, step = 33201 (0.561 sec)
INFO:tensorflow:global_step/sec: 175.133
INFO:tensorflow:loss = 0.004785289, step = 33301 (0.572 sec)
INFO:tensorflow:global_step/sec: 175.048
INFO:tensorflow:loss = 0.025753273, step = 33401 (0.571 sec)
INFO:tensorflow:global_step/sec: 175.422
INFO:tensorflow:loss = 0.01422596, step = 33501 (0.570 sec)
INFO:tensorflow:global_step/sec: 179.533
INFO:tensorflow:loss = 0.016675102, step = 33601 (0.556 sec)
INFO:tensorflow:global_step/sec: 174.882
INFO:tensorflow:loss = 0.012423555, step = 33701 (0.573 sec)
INFO:tensorflow:global_step/sec: 175.438
INFO:tensorflow:loss = 0.002602762, step = 33801 (0.570 sec)
INFO:tensorflow:global_step/sec: 178.116
INFO:tensorflow:loss = 0.0058739027, step = 33901 (0.561 sec)
INFO:tensorflow:global_step/sec: 175.131
INFO:tensorflow:loss = 0.05104249, step = 34001 (0.571 sec)
INFO:tensorflow:global_step/sec: 174.629
INFO:tensorflow:loss = 0.051842503, step = 34101 (0.573 sec)
INFO:tensorflow:global_step/sec: 173.535
INFO:tensorflow:loss = 0.02394399, step = 34201 (0.575 sec)
INFO:tensorflow:global_step/sec: 178.572
INFO:tensorflow:loss = 0.006257884, step = 34301 (0.560 sec)
INFO:tensorflow:global_step/sec: 178.73
INFO:tensorflow:loss = 0.043458547, step = 34401 (0.561 sec)
INFO:tensorflow:global_step/sec: 175.132
INFO:tensorflow:loss = 0.029802458, step = 34501 (0.571 sec)
INFO:tensorflow:global_step/sec: 167.499
INFO:tensorflow:loss = 0.018446762, step = 34601 (0.597 sec)
INFO:tensorflow:global_step/sec: 168.441
INFO:tensorflow:loss = 0.005036732, step = 34701 (0.594 sec)
INFO:tensorflow:global_step/sec: 162.075
INFO:tensorflow:loss = 0.009132434, step = 34801 (0.618 sec)
INFO:tensorflow:global_step/sec: 165.607
INFO:tensorflow:loss = 0.039977722, step = 34901 (0.604 sec)
INFO:tensorflow:global_step/sec: 164.556
INFO:tensorflow:loss = 0.014044449, step = 35001 (0.607 sec)
INFO:tensorflow:global_step/sec: 169.479
INFO:tensorflow:loss = 0.026804361, step = 35101 (0.591 sec)
INFO:tensorflow:global_step/sec: 163.461
INFO:tensorflow:loss = 0.007357258, step = 35201 (0.611 sec)
INFO:tensorflow:global_step/sec: 173.48
INFO:tensorflow:loss = 0.027144013, step = 35301 (0.576 sec)
INFO:tensorflow:global_step/sec: 171.722
INFO:tensorflow:loss = 0.02607493, step = 35401 (0.582 sec)
INFO:tensorflow:global_step/sec: 166.352
INFO:tensorflow:loss = 0.0077054393, step = 35501 (0.607 sec)
INFO:tensorflow:global_step/sec: 169.934
INFO:tensorflow:loss = 0.006032743, step = 35601 (0.583 sec)
INFO:tensorflow:global_step/sec: 176.992
INFO:tensorflow:loss = 0.042079628, step = 35701 (0.566 sec)
INFO:tensorflow:global_step/sec: 175.476
INFO:tensorflow:loss = 0.011502642, step = 35801 (0.569 sec)
INFO:tensorflow:global_step/sec: 174.489
INFO:tensorflow:loss = 0.040260255, step = 35901 (0.572 sec)
INFO:tensorflow:global_step/sec: 175.49
INFO:tensorflow:loss = 0.02654898, step = 36001 (0.571 sec)
INFO:tensorflow:global_step/sec: 175.745
INFO:tensorflow:loss = 0.0047122417, step = 36101 (0.569 sec)
INFO:tensorflow:global_step/sec: 180.506
INFO:tensorflow:loss = 0.0004900115, step = 36201 (0.554 sec)
INFO:tensorflow:global_step/sec: 174.473
INFO:tensorflow:loss = 0.010467996, step = 36301 (0.573 sec)
INFO:tensorflow:global_step/sec: 173.61
INFO:tensorflow:loss = 0.022136906, step = 36401 (0.576 sec)
INFO:tensorflow:global_step/sec: 176.885
INFO:tensorflow:loss = 0.020189352, step = 36501 (0.565 sec)
INFO:tensorflow:global_step/sec: 179.533
INFO:tensorflow:loss = 0.028750751, step = 36601 (0.557 sec)
INFO:tensorflow:global_step/sec: 179.353
INFO:tensorflow:loss = 0.0030443342, step = 36701 (0.557 sec)
INFO:tensorflow:global_step/sec: 148.285
INFO:tensorflow:loss = 0.018922625, step = 36801 (0.675 sec)
INFO:tensorflow:global_step/sec: 176.145
INFO:tensorflow:loss = 0.009772036, step = 36901 (0.569 sec)
INFO:tensorflow:global_step/sec: 173.4
INFO:tensorflow:loss = 0.015412685, step = 37001 (0.576 sec)
INFO:tensorflow:global_step/sec: 173.503
INFO:tensorflow:loss = 0.027635325, step = 37101 (0.576 sec)
INFO:tensorflow:global_step/sec: 179.389
INFO:tensorflow:loss = 0.031917803, step = 37201 (0.557 sec)
INFO:tensorflow:global_step/sec: 176.991
INFO:tensorflow:loss = 0.048122533, step = 37301 (0.565 sec)
INFO:tensorflow:global_step/sec: 161.871
INFO:tensorflow:loss = 0.017368501, step = 37401 (0.618 sec)
INFO:tensorflow:global_step/sec: 164.438
INFO:tensorflow:loss = 0.01038775, step = 37501 (0.609 sec)
INFO:tensorflow:global_step/sec: 159.593
INFO:tensorflow:loss = 0.010371204, step = 37601 (0.626 sec)
INFO:tensorflow:global_step/sec: 173.64
INFO:tensorflow:loss = 0.0038900631, step = 37701 (0.577 sec)
INFO:tensorflow:global_step/sec: 169.159
INFO:tensorflow:loss = 0.024514794, step = 37801 (0.590 sec)
INFO:tensorflow:global_step/sec: 166.466
INFO:tensorflow:loss = 0.0039127036, step = 37901 (0.602 sec)
INFO:tensorflow:global_step/sec: 182.725
INFO:tensorflow:loss = 0.012871689, step = 38001 (0.546 sec)
INFO:tensorflow:global_step/sec: 152.788
INFO:tensorflow:loss = 0.0067911954, step = 38101 (0.655 sec)
INFO:tensorflow:global_step/sec: 156.983
INFO:tensorflow:loss = 0.027267054, step = 38201 (0.638 sec)
INFO:tensorflow:global_step/sec: 155.773
INFO:tensorflow:loss = 0.0081559345, step = 38301 (0.640 sec)
INFO:tensorflow:global_step/sec: 135.183
INFO:tensorflow:loss = 0.03497915, step = 38401 (0.739 sec)
INFO:tensorflow:global_step/sec: 164.823
INFO:tensorflow:loss = 0.02037967, step = 38501 (0.612 sec)
INFO:tensorflow:global_step/sec: 130.908
INFO:tensorflow:loss = 0.008812608, step = 38601 (0.760 sec)
INFO:tensorflow:global_step/sec: 144.311
INFO:tensorflow:loss = 0.010545054, step = 38701 (0.693 sec)
INFO:tensorflow:global_step/sec: 148.069
INFO:tensorflow:loss = 0.005212416, step = 38801 (0.678 sec)
INFO:tensorflow:global_step/sec: 134.229
INFO:tensorflow:loss = 0.01127672, step = 38901 (0.745 sec)
INFO:tensorflow:global_step/sec: 135.53
INFO:tensorflow:loss = 0.021696523, step = 39001 (0.737 sec)
INFO:tensorflow:global_step/sec: 135.284
INFO:tensorflow:loss = 0.013601624, step = 39101 (0.741 sec)
INFO:tensorflow:global_step/sec: 138.063
INFO:tensorflow:loss = 0.0051733158, step = 39201 (0.721 sec)
INFO:tensorflow:global_step/sec: 139.468
INFO:tensorflow:loss = 0.01302847, step = 39301 (0.718 sec)
INFO:tensorflow:global_step/sec: 132.626
INFO:tensorflow:loss = 0.02420042, step = 39401 (0.763 sec)
INFO:tensorflow:global_step/sec: 134.534
INFO:tensorflow:loss = 0.02074557, step = 39501 (0.737 sec)
INFO:tensorflow:global_step/sec: 136.383
INFO:tensorflow:loss = 0.048180066, step = 39601 (0.729 sec)
INFO:tensorflow:global_step/sec: 141.556
INFO:tensorflow:loss = 0.02745619, step = 39701 (0.717 sec)
INFO:tensorflow:global_step/sec: 142.488
INFO:tensorflow:loss = 0.017327156, step = 39801 (0.699 sec)
INFO:tensorflow:global_step/sec: 149.57
INFO:tensorflow:loss = 0.0096516935, step = 39901 (0.663 sec)
INFO:tensorflow:global_step/sec: 127.59
INFO:tensorflow:loss = 0.005243919, step = 40001 (0.789 sec)
WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 40081 vs previous value: 40081. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
INFO:tensorflow:global_step/sec: 143.341
INFO:tensorflow:loss = 0.008023078, step = 40101 (0.695 sec)
INFO:tensorflow:global_step/sec: 158.075
INFO:tensorflow:loss = 0.019137057, step = 40201 (0.667 sec)
INFO:tensorflow:global_step/sec: 151.024
INFO:tensorflow:loss = 0.0057572504, step = 40301 (0.625 sec)
INFO:tensorflow:global_step/sec: 157.145
INFO:tensorflow:loss = 0.018355042, step = 40401 (0.636 sec)
INFO:tensorflow:global_step/sec: 171.651
INFO:tensorflow:loss = 0.02420875, step = 40501 (0.584 sec)
INFO:tensorflow:global_step/sec: 148.334
INFO:tensorflow:loss = 0.0015896545, step = 40601 (0.673 sec)
INFO:tensorflow:global_step/sec: 167.975
INFO:tensorflow:loss = 0.016948504, step = 40701 (0.595 sec)
INFO:tensorflow:global_step/sec: 173.31
INFO:tensorflow:loss = 0.010341845, step = 40801 (0.577 sec)
INFO:tensorflow:global_step/sec: 174.047
INFO:tensorflow:loss = 0.008499557, step = 40901 (0.575 sec)
INFO:tensorflow:global_step/sec: 145.137
INFO:tensorflow:loss = 0.05183633, step = 41001 (0.689 sec)
INFO:tensorflow:global_step/sec: 153.066
INFO:tensorflow:loss = 0.004924167, step = 41101 (0.653 sec)
INFO:tensorflow:global_step/sec: 153.611
INFO:tensorflow:loss = 0.0079441285, step = 41201 (0.651 sec)
INFO:tensorflow:global_step/sec: 161.173
INFO:tensorflow:loss = 0.012561766, step = 41301 (0.620 sec)
INFO:tensorflow:global_step/sec: 167.477
INFO:tensorflow:loss = 0.0029684836, step = 41401 (0.596 sec)
INFO:tensorflow:global_step/sec: 142.463
INFO:tensorflow:loss = 0.013587948, step = 41501 (0.705 sec)
INFO:tensorflow:global_step/sec: 142.819
INFO:tensorflow:loss = 0.006644853, step = 41601 (0.703 sec)
INFO:tensorflow:global_step/sec: 146.443
INFO:tensorflow:loss = 0.011492284, step = 41701 (0.685 sec)
INFO:tensorflow:global_step/sec: 131.58
INFO:tensorflow:loss = 0.013362874, step = 41801 (0.753 sec)
INFO:tensorflow:global_step/sec: 172.379
INFO:tensorflow:loss = 0.02212747, step = 41901 (0.579 sec)
INFO:tensorflow:global_step/sec: 178.754
INFO:tensorflow:loss = 0.038400866, step = 42001 (0.561 sec)
INFO:tensorflow:global_step/sec: 178.259
INFO:tensorflow:loss = 0.0006877725, step = 42101 (0.559 sec)
INFO:tensorflow:global_step/sec: 166.455
INFO:tensorflow:loss = 0.0021239985, step = 42201 (0.602 sec)
INFO:tensorflow:global_step/sec: 168.633
INFO:tensorflow:loss = 0.0016488982, step = 42301 (0.593 sec)
INFO:tensorflow:global_step/sec: 161.474
INFO:tensorflow:loss = 0.024730055, step = 42401 (0.620 sec)
INFO:tensorflow:global_step/sec: 163.131
INFO:tensorflow:loss = 0.004936693, step = 42501 (0.612 sec)
INFO:tensorflow:global_step/sec: 164.27
INFO:tensorflow:loss = 0.0025615648, step = 42601 (0.610 sec)
INFO:tensorflow:global_step/sec: 172.232
INFO:tensorflow:loss = 0.007736537, step = 42701 (0.579 sec)
INFO:tensorflow:global_step/sec: 170.941
INFO:tensorflow:loss = 0.004188291, step = 42801 (0.585 sec)
INFO:tensorflow:global_step/sec: 167.974
INFO:tensorflow:loss = 0.027176084, step = 42901 (0.595 sec)
INFO:tensorflow:global_step/sec: 172.908
INFO:tensorflow:loss = 0.008039431, step = 43001 (0.578 sec)
INFO:tensorflow:global_step/sec: 172.74
INFO:tensorflow:loss = 0.012057447, step = 43101 (0.579 sec)
INFO:tensorflow:global_step/sec: 174.52
INFO:tensorflow:loss = 0.031883642, step = 43201 (0.572 sec)
INFO:tensorflow:global_step/sec: 170.792
INFO:tensorflow:loss = 0.01543546, step = 43301 (0.587 sec)
INFO:tensorflow:global_step/sec: 172.076
INFO:tensorflow:loss = 0.018820006, step = 43401 (0.581 sec)
INFO:tensorflow:global_step/sec: 173.01
INFO:tensorflow:loss = 0.019853186, step = 43501 (0.578 sec)
INFO:tensorflow:global_step/sec: 177.771
INFO:tensorflow:loss = 0.011458083, step = 43601 (0.563 sec)
INFO:tensorflow:global_step/sec: 172.125
INFO:tensorflow:loss = 0.03594597, step = 43701 (0.581 sec)
INFO:tensorflow:global_step/sec: 177.916
INFO:tensorflow:loss = 0.019198285, step = 43801 (0.561 sec)
INFO:tensorflow:global_step/sec: 177.674
INFO:tensorflow:loss = 0.0013176335, step = 43901 (0.564 sec)
INFO:tensorflow:Saving checkpoints for 44000 into C:\Users\SAWADA~1\AppData\Local\Temp\tmpsk0m16lo\model.ckpt.
INFO:tensorflow:Loss for final step: 0.0022679986.
Out[15]:
<tensorflow_estimator.python.estimator.canned.dnn.DNNClassifier at 0x2723c5659e8>
In [16]:
# Note: `tf` is already bound to the `tensorflow.compat.v1` namespace here, so
# `tf.compat.v1.estimator...` resolves to `compat.v1.compat.v1` and raises an
# AttributeError. Call `tf.estimator.inputs.pandas_input_fn` directly instead.
test_input_fn = tf.estimator.inputs.pandas_input_fn(
    x={"X": X_test}, y=y_test, shuffle=False)
#test_input_fn = tf.estimator.inputs.numpy_input_fn(
#   x={"X": X_test}, y=y_test, shuffle=False)
eval_results = dnn_clf.evaluate(input_fn=test_input_fn)
In [ ]:
eval_results
In [ ]:
y_pred_iter = dnn_clf.predict(input_fn=test_input_fn)
y_pred = list(y_pred_iter)
y_pred[0]

10.3 Using plain TensorFlow

Here we build a model trained on the MNIST dataset with mini-batch gradient descent, defining the network architecture ourselves.

10.3.1 Construction phase

In [17]:
#import tensorflow as tf

n_inputs = 28*28  # MNIST
n_hidden1 = 300 # number of neurons in hidden layer 1
n_hidden2 = 100
n_outputs = 10
In [18]:
reset_graph()

# Use placeholder nodes to represent the training data and the targets.
# X is a placeholder because we don't yet know how many instances each training batch will contain (likewise for y, about which we know nothing except that it is 1-dimensional).
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int32, shape=(None), name="y")
In [19]:
# Before defining the output and hidden layers, prepare a generic function that creates a single layer
def neuron_layer(X, n_neurons, name, activation=None):
    with tf.name_scope(name):
        n_inputs = int(X.get_shape()[1])
        stddev = 2 / np.sqrt(n_inputs)
        init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)
        W = tf.Variable(init, name="kernel") # holds the weight matrix
        b = tf.Variable(tf.zeros([n_neurons]), name="bias") # bias variable; one bias parameter is created per neuron
        Z = tf.matmul(X, W) + b # subgraph that computes Z = X*W + b
        if activation is not None:
            return activation(Z) 
        else:
            return Z
In [20]:
# Build the deep neural network
with tf.name_scope("dnn"):
    hidden1 = neuron_layer(X, n_hidden1, name="hidden1",
                           activation=tf.nn.relu) # input layer X -> hidden layer 1, with ReLU activation
    hidden2 = neuron_layer(hidden1, n_hidden2, name="hidden2",
                           activation=tf.nn.relu) # hidden layer 1 -> hidden layer 2
    logits = neuron_layer(hidden2, n_outputs, name="outputs") # network output before it goes through the softmax activation function
In [21]:
# Define the cost function (cross entropy).
#  Recap: cross entropy penalizes models that estimate a low probability for the target class.
with tf.name_scope("loss"):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
                                                              logits=logits)
    # The output is a 1-D tensor containing the cross entropy of each instance; reduce_mean() then computes the mean cross entropy over all instances.
    loss = tf.reduce_mean(xentropy, name="loss")
In [22]:
# As in chapter 9, define a GradientDescentOptimizer (one of the optimizers that derive gradients from the cost function) that tweaks the model parameters to minimize the cost function
learning_rate = 0.01

with tf.name_scope("train"):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    training_op = optimizer.minimize(loss)
In [23]:
# Specify how to evaluate the model
with tf.name_scope("eval"):
    correct = tf.nn.in_top_k(logits, y, 1) # for each instance, True if the highest logit matches the target class
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
In [24]:
init = tf.global_variables_initializer()
saver = tf.train.Saver()

10.3.2 Execution phase

In [25]:
n_epochs = 40  # number of epochs
batch_size = 50 # mini-batch size
In [26]:
# Helper that shuffles the data and yields one mini-batch at a time
def shuffle_batch(X, y, batch_size):
    rnd_idx = np.random.permutation(len(X))
    n_batches = len(X) // batch_size
    for batch_idx in np.array_split(rnd_idx, n_batches):
        X_batch, y_batch = X[batch_idx], y[batch_idx]
        yield X_batch, y_batch
In [27]:
with tf.Session() as sess:
    init.run()
    
    # Training loop: in each epoch, iterate over a number of mini-batches matching the size of the training set
    for epoch in range(n_epochs):
        for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        # Evaluate the model on the last mini-batch and on the validation set
        acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
        acc_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})
        print(epoch, "Batch accuracy:", acc_batch, "Val accuracy:", acc_val)

    save_path = saver.save(sess, "./my_model_final.ckpt")
0 Batch accuracy: 0.9 Val accuracy: 0.9146
1 Batch accuracy: 0.92 Val accuracy: 0.936
2 Batch accuracy: 0.96 Val accuracy: 0.945
3 Batch accuracy: 0.92 Val accuracy: 0.9512
4 Batch accuracy: 0.98 Val accuracy: 0.956
5 Batch accuracy: 0.96 Val accuracy: 0.9566
6 Batch accuracy: 1.0 Val accuracy: 0.961
7 Batch accuracy: 0.94 Val accuracy: 0.963
8 Batch accuracy: 0.98 Val accuracy: 0.9648
9 Batch accuracy: 0.96 Val accuracy: 0.9662
10 Batch accuracy: 0.92 Val accuracy: 0.9688
11 Batch accuracy: 0.98 Val accuracy: 0.9694
12 Batch accuracy: 0.98 Val accuracy: 0.967
13 Batch accuracy: 0.98 Val accuracy: 0.9704
14 Batch accuracy: 1.0 Val accuracy: 0.9716
15 Batch accuracy: 0.94 Val accuracy: 0.973
16 Batch accuracy: 1.0 Val accuracy: 0.9732
17 Batch accuracy: 1.0 Val accuracy: 0.9742
18 Batch accuracy: 1.0 Val accuracy: 0.9744
19 Batch accuracy: 0.98 Val accuracy: 0.9744
20 Batch accuracy: 1.0 Val accuracy: 0.9752
21 Batch accuracy: 1.0 Val accuracy: 0.9756
22 Batch accuracy: 0.98 Val accuracy: 0.9766
23 Batch accuracy: 0.98 Val accuracy: 0.9752
24 Batch accuracy: 0.98 Val accuracy: 0.9768
25 Batch accuracy: 1.0 Val accuracy: 0.9766
26 Batch accuracy: 0.98 Val accuracy: 0.978
27 Batch accuracy: 1.0 Val accuracy: 0.9772
28 Batch accuracy: 0.96 Val accuracy: 0.9754
29 Batch accuracy: 0.98 Val accuracy: 0.978
30 Batch accuracy: 1.0 Val accuracy: 0.9756
31 Batch accuracy: 0.98 Val accuracy: 0.9774
32 Batch accuracy: 0.98 Val accuracy: 0.9772
33 Batch accuracy: 0.98 Val accuracy: 0.979
34 Batch accuracy: 1.0 Val accuracy: 0.9784
35 Batch accuracy: 1.0 Val accuracy: 0.9784
36 Batch accuracy: 0.98 Val accuracy: 0.9788
37 Batch accuracy: 1.0 Val accuracy: 0.978
38 Batch accuracy: 1.0 Val accuracy: 0.9792
39 Batch accuracy: 1.0 Val accuracy: 0.9778

10.3.3 Trying out the neural network

Now that we have trained the network, let's use the trained model to make predictions.

In [28]:
with tf.Session() as sess:
    saver.restore(sess, "./my_model_final.ckpt") # or better, use save_path
    X_new_scaled = X_test[:20] # new data to predict on
    Z = logits.eval(feed_dict={X: X_new_scaled})
    y_pred = np.argmax(Z, axis=1)
INFO:tensorflow:Restoring parameters from ./my_model_final.ckpt
In [29]:
print("Predicted classes:", y_pred)
print("Actual classes:   ", y_test[:20])
Predicted classes: [7 2 1 0 4 1 4 9 6 9 0 6 9 0 1 5 9 7 3 4]
Actual classes:    [7 2 1 0 4 1 4 9 5 9 0 6 9 0 1 5 9 7 3 4]
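From the two printed arrays we can compute the accuracy on this small sample directly (a quick sketch using the values shown above; not code from the book):

```python
# Predicted and actual classes, copied from the output above
pred   = [7, 2, 1, 0, 4, 1, 4, 9, 6, 9, 0, 6, 9, 0, 1, 5, 9, 7, 3, 4]
actual = [7, 2, 1, 0, 4, 1, 4, 9, 5, 9, 0, 6, 9, 0, 1, 5, 9, 7, 3, 4]

# Fraction of positions where prediction and truth agree
accuracy = sum(p == a for p, a in zip(pred, actual)) / len(actual)
print(accuracy)  # 0.95 -- only index 8 (a 5 predicted as a 6) is wrong
```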
In [30]:
from tensorflow_graph_in_jupyter import show_graph
In [32]:
import tensorflow as tf
# tf.get_default_graph() was removed from the top-level namespace in TF 2.x;
# use the compat.v1 alias instead
show_graph(tf.compat.v1.get_default_graph())

10.4 Fine-tuning neural network hyperparameters

There are many knobs to tweak: the number of layers, the number of neurons per layer, the type of activation function used in each layer, the weight initialization logic, and so on.

How do you find a good combination? As before, you could use grid search with cross-validation to find suitable hyperparameters, but the search space is so large that this takes far too long.

It is much better to use random search instead.

Also, if you have some insight into what values are reasonable for each hyperparameter, you can narrow the search space considerably.
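As a minimal sketch of the idea (the search space below is hypothetical, not taken from the text), random search evaluates only a fixed budget of randomly drawn combinations instead of enumerating every one:

```python
import random

random.seed(42)

# Hypothetical search space for a small feed-forward network
search_space = {
    "n_hidden_layers": [1, 2, 3],
    "n_neurons": [50, 100, 200, 300],
    "learning_rate": [0.001, 0.01, 0.1],
}

def sample_params(space):
    """Draw one random combination from the search space."""
    return {name: random.choice(values) for name, values in space.items()}

# Grid search would need 3 * 4 * 3 = 36 runs; random search caps the budget:
n_trials = 5
candidates = [sample_params(search_space) for _ in range(n_trials)]
for params in candidates:
    print(params)
```

In practice each sampled candidate would be trained and scored with cross-validation, and the best-scoring one kept.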

10.4.1 Number of hidden layers

Shallow vs. deep: research has shown that deep networks can model complex functions with exponentially fewer neurons than shallow ones, which makes them faster to train (i.e., they have higher parameter efficiency).

Why is that? Suppose you are asked to draw a forest using some drawing software: you would have to draw every tree and every branch one by one. But if you draw a single leaf, reuse it to draw a branch, then reuse the branch to draw a tree, and so on, this kind of hierarchical reuse lets you finish the task quickly.

Real-world data is often structured hierarchically in just this way, and deep networks (DNNs) can express that structure, which helps them converge faster to a good solution.

Such a hierarchical architecture also improves generalization to new datasets. For example, if you have a model trained to recognize faces in photos, you can build a new network for a different task (say, recognizing hairstyles) and reuse the lower layers of the existing network.

In practice, it is far more common to reuse parts of a pretrained network that performs a similar task than to train from scratch. Training then goes much faster and requires much less data.
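The layer-reuse idea can be sketched with plain NumPy (the shapes, the frozen "pretrained" weights, and the 5 hairstyle classes below are all hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.RandomState(42)

# Pretend these weights come from a network already trained on faces
# (hypothetical shapes: 784 inputs -> 300 -> 100 hidden units).
pretrained_W1 = rng.randn(784, 300) * 0.01
pretrained_W2 = rng.randn(300, 100) * 0.01

def lower_layers(X):
    """Reused, frozen feature extractor: two ReLU layers."""
    h1 = np.maximum(0, X @ pretrained_W1)
    return np.maximum(0, h1 @ pretrained_W2)

# The new task (e.g. hairstyle recognition) only trains a fresh
# output layer on top of the reused features.
new_head_W = rng.randn(100, 5) * 0.01  # 5 hypothetical hairstyle classes

X_new = rng.randn(8, 784)       # a small batch of new-task inputs
features = lower_layers(X_new)  # frozen: these weights stay fixed
logits = features @ new_head_W  # only new_head_W would be updated
print(logits.shape)             # (8, 5)
```

Because only the small output layer is trained, far fewer labeled examples are needed for the new task.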

10.4.2 Number of neurons per hidden layer

The number of neurons in the input and output layers is, of course, determined by the type of input and output the task requires.

For the hidden layers, a common practice is to size them to form a funnel, with fewer and fewer neurons at each successive layer.

Even so, finding the perfect neuron count is hard, so one approach is to keep adding layers until the model starts overfitting (in general, adding layers pays off more than adding neurons per layer).

It is often simpler to pick a model with more layers and neurons than you actually need, then use early stopping to prevent overfitting. This has been dubbed the "stretch pants" approach: instead of wasting time hunting for pants that exactly match your size, just buy slightly larger stretch pants that will shrink down to fit.
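The early stopping that makes the stretch-pants approach work boils down to a patience counter; here is a minimal sketch of that core logic, using a hypothetical (simulated) sequence of validation losses rather than a real training run:

```python
# Hypothetical validation losses, one per epoch (simulated for illustration)
val_losses = [0.9, 0.7, 0.6, 0.55, 0.56, 0.57, 0.58, 0.59]

best_loss = float("inf")
patience = 2                  # stop after 2 epochs without improvement
epochs_without_progress = 0
stopped_at = None

for epoch, loss in enumerate(val_losses):
    if loss < best_loss:
        best_loss = loss      # improvement: remember it (and save a checkpoint)
        epochs_without_progress = 0
    else:
        epochs_without_progress += 1
        if epochs_without_progress >= patience:
            stopped_at = epoch  # no progress for `patience` epochs: stop
            break

print(best_loss)   # 0.55
print(stopped_at)  # 5
```

The exercise solution later in this notebook implements the same idea in TensorFlow, checkpointing the best model as it goes.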

10.4.3 Activation functions

For the hidden layers, the ReLU function is used in most cases.

The reasons are that it is slightly faster to compute than other activation functions, and that it does not saturate for large input values, so gradient descent does not get stuck on plateaus. (The logistic function and the hyperbolic tangent, by contrast, saturate at 1.)

For the output layer, the softmax function is generally used for classification tasks; for regression tasks you can simply use no activation function at all.
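The saturation difference is easy to see numerically; a small NumPy sketch (not from the book's code) comparing the three functions mentioned above:

```python
import numpy as np

def relu(z):
    """Does not saturate for large positive inputs."""
    return np.maximum(0, z)

def sigmoid(z):
    """Saturates near 0 and 1 for large |z|, so gradients vanish there."""
    return 1 / (1 + np.exp(-z))

def softmax(z):
    """Turns logits into class probabilities that sum to 1."""
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

z = np.array([-100.0, 0.0, 100.0])
print(relu(z))     # large values pass straight through, slope stays 1
print(sigmoid(z))  # ~[0, 0.5, 1]: flat at both ends, gradient vanishes
print(softmax(np.array([2.0, 1.0, 0.1])))  # probabilities summing to 1
```

This is why ReLU keeps gradient descent moving on large activations, while a softmax output layer is convenient for mutually exclusive classes.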

Using dense() instead of neuron_layer()

Note: previous releases of the book used tensorflow.contrib.layers.fully_connected() rather than tf.layers.dense() (which did not exist when this chapter was written). It is now preferable to use tf.layers.dense(), because anything in the contrib module may change or be deleted without notice. The dense() function is almost identical to the fully_connected() function, except for a few minor differences:

  • several parameters are renamed: scope becomes name, activation_fn becomes activation (and similarly the _fn suffix is removed from other parameters such as normalizer_fn), weights_initializer becomes kernel_initializer, etc.
  • the default activation is now None rather than tf.nn.relu.
  • a few more differences are presented in chapter 11.
In [33]:
n_inputs = 28*28  # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
In [36]:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

reset_graph()

X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int32, shape=(None), name="y") 
In [37]:
with tf.name_scope("dnn"):
    hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
                              activation=tf.nn.relu)
    hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",
                              activation=tf.nn.relu)
    logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
    y_proba = tf.nn.softmax(logits)
WARNING:tensorflow:From <ipython-input-37-28239acea1fc>:3: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.Dense instead.
WARNING:tensorflow:From C:\Users\sawadayuki\Anaconda3\lib\site-packages\tensorflow_core\python\layers\core.py:187: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.__call__` method instead.
In [38]:
with tf.name_scope("loss"):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss = tf.reduce_mean(xentropy, name="loss")
In [39]:
learning_rate = 0.01

with tf.name_scope("train"):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    training_op = optimizer.minimize(loss)
In [40]:
with tf.name_scope("eval"):
    correct = tf.nn.in_top_k(logits, y, 1)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
In [41]:
init = tf.global_variables_initializer()
saver = tf.train.Saver()
In [42]:
n_epochs = 20
batch_size = 50

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
        acc_valid = accuracy.eval(feed_dict={X: X_valid, y: y_valid})
        print(epoch, "Batch accuracy:", acc_batch, "Validation accuracy:", acc_valid)

    save_path = saver.save(sess, "./my_model_final.ckpt")
0 Batch accuracy: 0.9 Validation accuracy: 0.9028
1 Batch accuracy: 0.92 Validation accuracy: 0.9252
2 Batch accuracy: 0.94 Validation accuracy: 0.9374
3 Batch accuracy: 0.9 Validation accuracy: 0.942
4 Batch accuracy: 0.94 Validation accuracy: 0.9472
5 Batch accuracy: 0.94 Validation accuracy: 0.951
6 Batch accuracy: 1.0 Validation accuracy: 0.955
7 Batch accuracy: 0.94 Validation accuracy: 0.9612
8 Batch accuracy: 0.96 Validation accuracy: 0.962
9 Batch accuracy: 0.94 Validation accuracy: 0.9652
10 Batch accuracy: 0.92 Validation accuracy: 0.9654
11 Batch accuracy: 0.98 Validation accuracy: 0.9668
12 Batch accuracy: 0.98 Validation accuracy: 0.9684
13 Batch accuracy: 0.98 Validation accuracy: 0.9702
14 Batch accuracy: 1.0 Validation accuracy: 0.9696
15 Batch accuracy: 0.94 Validation accuracy: 0.9716
16 Batch accuracy: 0.98 Validation accuracy: 0.9728
17 Batch accuracy: 1.0 Validation accuracy: 0.9728
18 Batch accuracy: 0.98 Validation accuracy: 0.9744
19 Batch accuracy: 0.98 Validation accuracy: 0.9758
In [ ]:
show_graph(tf.get_default_graph())

Exercise solutions

1. to 8.

See appendix A.

9.

Train a deep MLP on the MNIST dataset and see if you can get over 98% precision. Just like in the last exercise of chapter 9, try adding all the bells and whistles (i.e., save checkpoints, restore the last checkpoint in case of an interruption, add summaries, plot learning curves using TensorBoard, and so on).

First let's create the deep net. It's exactly the same as earlier, with just one addition: we add a tf.summary.scalar() to track the loss and the accuracy during training, so we can view nice learning curves using TensorBoard.

In [ ]:
n_inputs = 28*28  # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
In [ ]:
reset_graph()

X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int32, shape=(None), name="y") 
In [ ]:
with tf.name_scope("dnn"):
    hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
                              activation=tf.nn.relu)
    hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",
                              activation=tf.nn.relu)
    logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
In [ ]:
with tf.name_scope("loss"):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss = tf.reduce_mean(xentropy, name="loss")
    loss_summary = tf.summary.scalar('log_loss', loss)
In [ ]:
learning_rate = 0.01

with tf.name_scope("train"):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    training_op = optimizer.minimize(loss)
In [ ]:
with tf.name_scope("eval"):
    correct = tf.nn.in_top_k(logits, y, 1)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
    accuracy_summary = tf.summary.scalar('accuracy', accuracy)
In [ ]:
init = tf.global_variables_initializer()
saver = tf.train.Saver()

Now we need to define the directory to write the TensorBoard logs to:

In [ ]:
from datetime import datetime

def log_dir(prefix=""):
    now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
    root_logdir = "tf_logs"
    if prefix:
        prefix += "-"
    name = prefix + "run-" + now
    return "{}/{}/".format(root_logdir, name)
In [ ]:
logdir = log_dir("mnist_dnn")

Now we can create the FileWriter that we will use to write the TensorBoard logs:

In [ ]:
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())

Hey! Why don't we implement early stopping? For this, we are going to need to use the validation set.

In [ ]:
m, n = X_train.shape
In [ ]:
n_epochs = 10001
batch_size = 50
n_batches = int(np.ceil(m / batch_size))

checkpoint_path = "/tmp/my_deep_mnist_model.ckpt"
checkpoint_epoch_path = checkpoint_path + ".epoch"
final_model_path = "./my_deep_mnist_model"

best_loss = np.infty
epochs_without_progress = 0
max_epochs_without_progress = 50

with tf.Session() as sess:
    if os.path.isfile(checkpoint_epoch_path):
        # if the checkpoint file exists, restore the model and load the epoch number
        with open(checkpoint_epoch_path, "rb") as f:
            start_epoch = int(f.read())
        print("Training was interrupted. Continuing at epoch", start_epoch)
        saver.restore(sess, checkpoint_path)
    else:
        start_epoch = 0
        sess.run(init)

    for epoch in range(start_epoch, n_epochs):
        for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        accuracy_val, loss_val, accuracy_summary_str, loss_summary_str = sess.run([accuracy, loss, accuracy_summary, loss_summary], feed_dict={X: X_valid, y: y_valid})
        file_writer.add_summary(accuracy_summary_str, epoch)
        file_writer.add_summary(loss_summary_str, epoch)
        if epoch % 5 == 0:
            print("Epoch:", epoch,
                  "\tValidation accuracy: {:.3f}%".format(accuracy_val * 100),
                  "\tLoss: {:.5f}".format(loss_val))
            saver.save(sess, checkpoint_path)
            with open(checkpoint_epoch_path, "wb") as f:
                f.write(b"%d" % (epoch + 1))
            if loss_val < best_loss:
                saver.save(sess, final_model_path)
                best_loss = loss_val
            else:
                epochs_without_progress += 5
                if epochs_without_progress > max_epochs_without_progress:
                    print("Early stopping")
                    break
In [ ]:
os.remove(checkpoint_epoch_path)
In [ ]:
with tf.Session() as sess:
    saver.restore(sess, final_model_path)
    accuracy_val = accuracy.eval(feed_dict={X: X_test, y: y_test})
In [ ]:
accuracy_val